We think the field of rhetoric/writing needs to address—and quickly!—three current and rapidly expanding developments in artificial intelligence (AI) technology:
- AI chatbots are now operating in the workplace. The students graduating from our writing majors need to be prepared to write for bot communications, becoming scriptwriters for bots and writing instructors for bots. Just peruse the job ads out there—Wade & Wendy seeks a “Chatbot Copywriter,” Conversable seeks a “Conversation Designer/Writer,” iRobot Corporation seeks a “UX Writer.” We and our professional writing students need to know how to write for bots and how to teach bots to write, because bots are increasingly handling basic writing/communication tasks such as customer service, technical documentation, and news and report writing.
- AI writing bots, aka “smart writers,” will soon do our writing for us. Very soon our students will have bots available to “assist” their academic writing—and that will become a perfectly reasonable thing for them to be doing, not unlike their current use of spelling and grammar checkers. More companies and even individuals will have, for example, their own version of Heliograf, the news-writing bot at the Washington Post, or Wibbitz, the AI video service USA Today uses.
- AI-based teachers (aka “smart teachers”), or at least teacher assistants, are now in use at some universities. AI teacher bots are already here and their capabilities are expanding rapidly. By coupling the machine capabilities of computer-scored writing with the conversational abilities of an AI chatbot, you get a teacher who can not only interact with students but provide grades as well. They will be capable of delivering writing instruction (of a sort, more on that in a moment), particularly for first-year composition courses—and of course, unlike their slower human counterparts, they will return papers immediately. Jill Watson, the virtual teaching assistant at Georgia Tech, will morph into the composition instructor and writing tutor at Michigan, Miami, and Mississippi State, etc.
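Much of the routine news and report writing mentioned above is, at its core, data-to-text generation: structured data slotted into human-authored templates and rules. The toy sketch below illustrates the basic pattern; the team names, data, and verb choices are our invention, and real systems like Heliograf are of course far more sophisticated (template selection, variation, editorial rules).

```python
# Toy data-to-text "news bot": structured data + human-written template rules.
# Everything here is invented for illustration, not any real system's code.

def write_recap(game):
    """Turn a box score into a one-sentence game recap."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home_team"], game["away_team"]
        score = f'{game["home_score"]}-{game["away_score"]}'
    else:
        winner, loser = game["away_team"], game["home_team"]
        score = f'{game["away_score"]}-{game["home_score"]}'
    # A human writer chose these verbs and thresholds; the bot just applies them.
    verb = "edged" if margin <= 3 else "defeated" if margin <= 14 else "routed"
    return f"{winner} {verb} {loser}, {score}."

game = {"home_team": "Miami", "away_team": "Ohio",
        "home_score": 24, "away_score": 21}
print(write_recap(game))  # Miami edged Ohio, 24-21.
```

Notice that writers, not programmers, supply the verbs and thresholds here, which is precisely why bot companies are hiring copywriters and conversation designers.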
No sense denying these developments. “Human teachers can never be replaced!” But they can; it is already happening. Yes, you and we, the college writing teachers, can be replaced. (Actually, in first-year writing courses, we are already being replaced: by the Advanced Placement writing test and by dual-enrollment composition courses.) So the issue on the table is, What do we do about it? What does this mean for our field? In this short piece our aim is mainly to highlight the issue and start the discussion we desperately need to have.
AI’s Move to Deep Learning and Autonomy
First, it’s helpful to know a bit about where AI is going. The term artificial intelligence is applied to a lot of systems that aren’t that intelligent—yet. Siri, Alexa, and other personal assistants are advertised as “smart assistants,” but they are basically glorified search engines with natural language processing (NLP) and aural/oral capabilities. They are able to listen, respond, read, and write in a way, but only within the constraints of pre-programmed data sets. What many personal assistants and customer service bots currently lack is what many newer AI systems are developing: the capability for deep learning and autonomous learning through neural networks.
The term neural networks makes it sound like AI is thinking like a human, but, of course, it’s not—at least not yet. It’s still all numbers and algorithms. But as programmers build in layers of neural networks and code machines so they can lay down their own, AI is changing: the machines are learning for themselves based on their interactions with the environment. Yes, they’re still programmed at the front end, but then they take their programming and run with it, sometimes in interesting ways (Libratus winning at Texas Hold ‘Em), but often in troublesome (Microsoft’s Tay) and downright tragic (Uber’s self-driving cars) ways. The question of whether the machine is really thinking or merely simulating thinking—the key question raised by John Searle (1980) in the Chinese Room Argument—might well be beside the point because, however they are processing information, AI agents are acting intelligently and making decisions in the world.
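For readers who want a concrete, if drastically simplified, picture of what “learning” means here: a neural network starts with arbitrary numeric weights and nudges them, example by example, to reduce its error. The single-neuron sketch below (entirely our illustration, not any production system) learns the logical AND function from four examples rather than having that behavior hand-coded.

```python
# A single artificial "neuron" learning the logical AND function.
# It starts with zero weights and adjusts them after each example
# (the classic perceptron learning rule); nothing is hand-coded.

def train_perceptron(examples, epochs=20, rate=0.1):
    w = [0.0, 0.0]   # connection weights, adjusted by learning
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the guess?
            w[0] += rate * err * x1     # nudge weights toward the answer
            w[1] += rate * err * x2
            b += rate * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
net = train_perceptron(AND)
print([net(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and tunes millions of weights, but the underlying move is the same: error-driven adjustment of numbers, not human-style thought.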
Unlike in the United States, where AI development is literally killing people, the European Union has recognized that the realm of AI and robotic technology development needs guidance and rules. In 2017 the European Parliament adopted the Civil Law Rules on Robotics, which define AI agents as interactive, autonomous robots that are able to adapt their behavior and actions to the environment. This definition is an important first step in laying down some guidelines for human-machine interaction.
As AI bots become more interactive, autonomous, and capable of self-processed adaptive learning, they will become better agents and communicators, better at such actions as speaking, reading, writing, and evaluating writing.
Chat Bots & Writing Bots
To what extent, then, can bots do our writing tasks for us, at a level of effectiveness and quality such that they could pass a Turing Test for writing: that is, such that readers cannot tell the difference between a bot and a human performing the same writing task?
Well, that depends, of course, on the writing context, but bots have certainly passed the Turing Test informally in a wide variety of contexts. For example, x.ai’s personal assistant scheduling bot, Amy/Andrew Ingram, is very often mistaken for human in the context of email about meetings. In fact, some email correspondents have even flirted with Amy Ingram and sent her flowers and chocolates. Some poetry-writing bots are already informally passing the Turing Test. Of greater worry, though, is that during the 2016 presidential election millions of people in the United States were unaware that the fellow “citizens” they were talking politics with on Twitter were actually bots (Bessi & Ferrara, 2016) deployed as part of troll farm attacks on US democracy.
In the academic realm, bots are already writing student papers. For example, Dr. Assignment promotes on its website that “A.I. technology will automatically write your assignment paper for you if you are too lazy or don’t know what to write.” As this marketing suggests (aimed, it seems, at students who’d be stoked with a C or D), the level of output isn’t expected to be high quality (yet), but as these programs become more sophisticated and nuanced, bot-produced papers will certainly get better.
William Hart-Davidson (2018) thinks that we are not so “far away from a time when almost nobody composes a first draft of anything … you’ll take over at the revision stage” (p. 252). Is it cheating to have a robot write your paper for you? Well, some composition teachers thought it unfair, originally, when spell and grammar checkers came online. We managed to get through that crisis, finally arriving at the position that spell checkers are aids, but they are not 100% foolproof, and the human writer is still responsible for spelling. Will we treat draft writing bots the same way?
In relation to writing bots, we need to be aware of AI-writing developments and be able to teach students how to write scripts for bots, how to write with bots, and, for those going into education, how to teach in a world with writing bots. We also need to be part of the important conversations shaping the ethical use of writing bots, considering such questions as:
- When and how should bots be used as professional writers?
- Should there be any limits/restrictions on the use of writing bots? That is, are there some purposes/contexts for which they should not be used?
- Should writing bots be used transparently? That is, should readers/users be alerted that they are communicating with a machine, or doesn’t that matter? (Spoiler alert: We are advocates of the position that urges transparency in the uses of bots, particularly for public writing.)
The field of rhetoric/writing has much to offer these discussions, given our field’s long history of considering writing in human contexts and in the realm of human-machine communication.
AI Instructors & Teaching Assistants
AI writing teachers might seem pretty far-fetched, but that too is developing faster than we may realize. Working with very broad brushstrokes, let’s start by identifying the distinct functions of teaching: designing the course curriculum, delivering course material, interacting with students, and grading/evaluating student work and performance.
Let’s start with grading papers. Machine scoring of student writing has been around a long time, and numerous studies in our field have shown its limitations (Ericsson & Haswell, 2006). But many of those studies were conducted on older, less smart systems that, as Les Perelman demonstrated, could be easily gamed: long nonsense essays with long sentences and multisyllabic keywords would score higher than well-written short essays with shorter sentences (Herrington & Moran, 2001; Perelman, 2012). As the technology develops and computers can juggle more variables, AI raters will become better readers and evaluators of writing.
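Perelman’s critique is easy to demonstrate in miniature. The toy scorer below, which is our own invention and far cruder than any real automated essay scoring system, rewards only surface features (word count, sentence length, share of long words), so a verbose nonsense paragraph outscores a short, clear one.

```python
# A deliberately naive essay scorer in the spirit of Perelman's critique:
# it rewards only surface features, so it is trivially gamed by verbose nonsense.

import re

def naive_score(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)
    long_words = sum(1 for w in words if len(w) >= 9)  # crude syllable proxy
    return len(words) * 0.1 + avg_sentence_len * 0.5 + long_words * 2.0

clear = "Machine scoring rewards length. Perelman showed this flaw clearly."
nonsense = ("Notwithstanding multitudinous considerations, the epistemological "
            "instantiation of pedagogical methodologies necessitates "
            "comprehensive recontextualization of administrative paradigms "
            "notwithstanding considerable institutional obfuscation.")

print(naive_score(nonsense) > naive_score(clear))  # True
```

Newer AI raters weigh far more than these surface features, but the episode is a standing caution about what a machine is actually measuring when it “reads.”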
But can AI systems interact effectively with students? That’s where natural language processing comes in. Many students in an online computer science course at Georgia Tech can’t figure out which of their teaching assistants are human and which are computers. Jill Watson and family have them fooled. These computer teaching assistants aren’t grading work (yet); they are just “automatically answering a variety of routine, frequently asked questions, and automatically replying to student introductions” (Goel & Polepeddi, 2016; see also Eicher, Polepeddi, & Goel, 2018). Yet they are doing so in ways similar enough to human responses in the same context that they are, rather effectively it seems, replacing human teaching assistants.
As we all face increased pressures to raise class sizes, to teach more students with fewer faculty, to get “lean” and “efficient” (buzzwords on our campus and, we imagine, on yours), we may not have AI agents as teachers anytime soon, but what about AI teaching assistants that interact with students in limited ways? Is there a place for a Jill Watson in some of our courses and programs? What are the roles for human teachers versus machine teachers? These are questions we need to consider and be prepared to answer, because without a doubt the landscape of education is changing. In particular, for us, the most immediate question is this: Could bots handle the teaching of first-year college composition? That possibility could be upon us in another five to ten years.
Call to Conversation and Action
We close this post by inviting ongoing conversation and action around these issues among ourselves in the rhetoric/writing field, with colleagues in other fields, and with working professionals.
In general we feel that a Luddite rejection of AI is not the answer. Instead, calling on the critical theory of Andrew Feenberg (1991) and others (Selber, 2004), we advocate for a critical engagement with technology development to ensure that its designs and uses are truly smart, not just convenient or cost-cutting, and appropriate for our educational mission and goals and for our students. By turning our research and our teaching to AI-related areas, we will position ourselves and our students to be at the forefront of the changes, helping, we hope, to critically shape technology development, policy, and usage rather than merely reacting to them.
Note
This blog post is a portion of a longer article-in-progress that we are currently completing. We have also written about this topic in a chapter titled “AI Agents as Professional Communicators,” in our book Professional Communication and Network Interaction: A Rhetorical and Ethical Approach (Routledge, 2017).
References
Bessi, Alessandro, & Ferrara, Emilio. (2016). Social bots distort the 2016 U.S. Presidential election online discussion. First Monday, 21(11). http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653a
Eicher, Bobbie, Polepeddi, Lalith, & Goel, Ashok. (2018). Jill Watson doesn’t care if you’re pregnant: Grounding AI ethics in empirical studies. Georgia Tech Library. http://dilab.gatech.edu/publications/jill-watson-doesnt-care-if-youre-pregnant-grounding-ai-ethics-in-empirical-studies/
Ericsson, Patricia Freitag, & Haswell, Richard H. (eds.). (2006). Machine scoring of student essays: Truth and consequences. Logan, UT: Utah State University Press.
Feenberg, Andrew. (1991). Critical theory of technology. Oxford: Oxford University Press.
Goel, Ashok, & Polepeddi, Lalith. (2016). Jill Watson: A virtual teaching assistant for online education. Georgia Tech Library. https://smartech.gatech.edu/handle/1853/59104
Hart-Davidson, William. (2018). Writing with robots and other curiosities of the age of machine rhetorics. In Jonathan Alexander & Jacqueline Rhodes (eds.), The Routledge handbook of digital writing and rhetoric (pp. 248-255). New York: Routledge.
Herrington, Anne, & Moran, Charles. (2001). What happens when machines read our students’ writing? College English, 63(4), 480–499.
McKee, Heidi A., & Porter, James E. (2017). Professional communication and network interaction: A rhetorical and ethical approach. New York: Routledge/Series in Rhetoric and Communication.
Perelman, Les. (2012). Construct validity, length, score, and time in holistically graded writing assessments: The case against automatic essay scoring (AES). In Charles Bazerman et al. (eds.), International advances in writing research: Cultures, places, measures. WAC Clearinghouse. https://wac.colostate.edu/books/perspectives/wrab2011/
Searle, John. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–457.
Selber, Stuart A. (2004). Multiliteracies for a digital age. Carbondale, IL: Southern Illinois University Press.
Turing, Alan. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.